Authorship identification of text based on attention mechanism
ZHANG Yang, JIANG Minghu
Journal of Computer Applications    2021, 41 (7): 1897-1901.   DOI: 10.11772/j.issn.1001-9081.2020101528
The accuracy of authorship identification based on deep neural networks decreases significantly when faced with a large number of candidate authors. In order to improve the accuracy of authorship identification, a neural network consisting of fast text classification (fastText) and an attention layer was proposed, and it was combined with continuous Part-Of-Speech (POS) n-gram features for authorship identification of Chinese novels. The experimental results show that the proposed model obtains higher classification accuracy than Text Convolutional Neural Network (TextCNN), Text Recurrent Neural Network (TextRNN), Long Short-Term Memory (LSTM) network and fastText. Compared with the fastText model, the introduction of the attention mechanism increases the accuracy under different POS n-gram features by 2.14 percentage points on average; meanwhile, the model retains the speed and efficiency of fastText, and the text features it uses can be applied to other languages.
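A minimal sketch of the kind of attention layer described above, applied on top of a sequence of POS n-gram embeddings; the embedding dimensions, the dot-product scoring against a context vector, and the final linear classifier are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# toy input: a document as a sequence of POS n-gram embeddings (assumed dimensions)
seq_len, emb_dim, n_authors = 20, 16, 5
ngram_embeddings = rng.normal(size=(seq_len, emb_dim))

# attention layer: score each n-gram embedding against a learned context vector,
# then pool the sequence into one weighted document vector (instead of fastText's plain mean)
context = rng.normal(size=emb_dim)          # learnable attention parameter
weights = softmax(ngram_embeddings @ context)
doc_vector = weights @ ngram_embeddings     # attention-weighted average

# linear classifier over candidate authors (stand-in for the output layer)
W, b = rng.normal(size=(n_authors, emb_dim)), np.zeros(n_authors)
author_probs = softmax(W @ doc_vector + b)
print(author_probs.argmax(), author_probs.round(3))
```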
Intelligent recommendation method for lock mechanism in concurrent program
ZHANG Yang, DONG Shicheng
Journal of Computer Applications    2021, 41 (6): 1597-1603.   DOI: 10.11772/j.issn.1001-9081.2020121929
Developers must choose among several Java lock mechanisms during parallel programming. To solve the problem of how to choose an appropriate lock mechanism to improve program performance, a recommendation method named LockRec was proposed to help developers of concurrent programs choose a lock mechanism. Firstly, static program analysis was used to analyze the use of lock mechanisms in concurrent programs and to determine the program feature attributes that affect program performance. Then, an improved random forest algorithm was used to build a lock-mechanism recommendation model, helping developers choose among the synchronized lock, the re-entrant lock, the read-write lock and the stamped lock. Four existing machine learning datasets were selected to evaluate LockRec, on which its average accuracy is 95.1%. In addition, real-world concurrent programs were used to analyze the recommendation results of LockRec. The experimental results show that LockRec can effectively improve the execution efficiency of concurrent programs.
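A minimal sketch of the recommendation step under stated assumptions: the feature names and toy data are invented for illustration, and scikit-learn's stock RandomForestClassifier stands in for the paper's improved random forest.

```python
from sklearn.ensemble import RandomForestClassifier

# hypothetical static-analysis features per critical section:
# [lock-held instruction count, read ratio, write ratio, thread count, nesting depth]
X = [
    [120, 0.90, 0.10, 8, 1],
    [500, 0.20, 0.80, 4, 2],
    [ 60, 0.50, 0.50, 2, 1],
    [300, 0.95, 0.05, 16, 1],
]
# lock mechanism that performed best for each sample (labels are illustrative)
y = ["read-write", "synchronized", "reentrant", "stamped"]

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# recommend a lock for a new critical section found by static analysis
print(model.predict([[200, 0.85, 0.15, 8, 1]]))
```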
Database star-join optimization for multicore CPU and GPU platforms
LIU Zhuan, HAN Ruichen, ZHANG Yansong, CHEN Yueguo, ZHANG Yu
Journal of Computer Applications    2021, 41 (3): 611-617.   DOI: 10.11772/j.issn.1001-9081.2020091430
Focusing on the high execution cost of star-join between the fact table and multiple dimension tables in On-line Analytical Processing (OLAP), a star-join optimization technique was proposed for advanced multicore CPU (Central Processing Unit) and GPU (Graphics Processing Unit). Firstly, the vector index based vectorized star-join algorithm on CPU and GPU platforms was proposed for the intermediate materialization cost problem in star-join in multicore CPU and GPU platforms. Secondly, the star-join operation based on vector granularity was presented according to the vector division for CPU cache size and GPU shared memory size, so as to optimize the vector index materialization cost in star-join. Finally, the compressed vector index based star-join algorithm was proposed to compress the fixed-length vector index to the variable-length binary vector index, so as to improve the storage access efficiency of the vector index in cache under low selection rate. Experimental results show that the vectorized star-join algorithm achieves more than 40% performance improvement compared to the traditional row-wise or column-wise star-join algorithms on multicore CPU platform, and the vectorized star-join algorithm achieves more than 15% performance improvement compared to the conventional star-join algorithms on GPU platform; in the comparison with the mainstream main-memory databases and GPU databases, the optimized star-join algorithm achieves 130% performance improvement compared to the optimal main-memory database Hyper, and achieves 80% performance improvement compared to the optimal GPU database OmniSci. It can be seen that the vector index based star-join optimization technique effectively improves the multiple table join performance, and compared with the traditional optimization techniques, the vector index based vectorized processing improves the data storage access efficiency in small cache, and the compressed vector further improves the vector index access efficiency in cache.
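The vector-index idea can be illustrated on a toy in-memory star schema: each dimension filter is evaluated into a predicate vector, and the fact table's foreign-key columns gather those vectors so the join reduces to element-wise ANDs. The table layout below is an assumption for illustration, not the paper's storage engine.

```python
import numpy as np

# toy star schema: the fact table stores foreign keys into two dimension tables
fact_fk = {
    "date":  np.array([0, 1, 2, 1, 0, 2]),
    "store": np.array([2, 2, 0, 1, 1, 0]),
}
# per-dimension predicate vectors: True where the dimension row passes its filter
dim_pass = {
    "date":  np.array([True, False, True]),
    "store": np.array([True, True, False]),
}

# vector index over fact rows: gather each dimension's predicate vector through the
# foreign-key column and AND the results, with no materialized intermediate join
vec_index = np.ones(len(fact_fk["date"]), dtype=bool)
for dim, fk in fact_fk.items():
    vec_index &= dim_pass[dim][fk]

print(np.nonzero(vec_index)[0])   # fact rows that survive all star-join filters
```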
Patent text classification based on ALBERT and bidirectional gated recurrent unit
WEN Chaodong, ZENG Cheng, REN Junwei, ZHANG Yan
Journal of Computer Applications    2021, 41 (2): 407-412.   DOI: 10.11772/j.issn.1001-9081.2020050730
With the rapid increase in the number of patent applications, the demand for automatic classification of patent text is increasing. Most of the existing patent text classification algorithms utilize methods such as Word2vec and Global Vectors (GloVe) to obtain the word vector representation of the text, while a lot of word position information is abandoned and the complete semantics of the text cannot be expressed. In order to solve these problems, a multilevel patent text classification model named ALBERT-BiGRU was proposed by combining ALBERT (A Lite BERT) and BiGRU (Bidirectional Gated Recurrent Unit). In this model, dynamic word vector pre-trained by ALBERT was used to replace the static word vector trained by traditional methods like Word2vec, so as to improve the representation ability of the word vector. Then, the BiGRU neural network model was used for training, which preserved the semantic association between long-distance words in the patent text to the greatest extent. In the effective verification on the patent text dataset published by State Information Center, compared with Word2vec-BiGRU and GloVe-BiGRU, the accuracy of ALBERT-BiGRU was increased by 9.1 percentage points and 10.9 percentage points respectively at the department level of patent text, and was increased by 9.5 percentage points and 11.2 percentage points respectively at the big class level. Experimental results show that ALBERT-BiGRU can effectively improve the classification effect of patent texts of different levels.
High-precision sparse reconstruction of CT images based on multiply residual UNet
ZHANG Yanjiao, QIAO Zhiwei
Journal of Computer Applications    2021, 41 (10): 2964-2969.   DOI: 10.11772/j.issn.1001-9081.2020121985
Aiming at the problem of streak artifacts produced during sparse analytic reconstruction of Computed Tomography (CT), and in order to better suppress these artifacts, a Multiply residual UNet (Mr-UNet) network architecture was proposed based on the classical UNet architecture. Firstly, images with streak artifacts were obtained by sparse reconstruction with the traditional Filtered Back Projection (FBP) analytic reconstruction algorithm. Then, the reconstructed images were used as the input of the network, and the corresponding high-precision images were used as the training labels, so that the network learned to suppress streak artifacts. Finally, the original four-layer down-sampling of the classical residual UNet was deepened to five layers, and the residual learning mechanism was introduced into the proposed model, so that each convolution unit was constructed as a residual structure to improve the training performance of the network. In the experiments, 2 000 pairs of 256×256 images, each containing an image with streak artifacts and the corresponding high-precision image, were used as the dataset; 1 900 pairs were used as the training set, 50 pairs as the validation set, and the rest as the test set to train the network and to validate and evaluate its performance. The experimental results show that, compared with the traditional Total Variation (TV) minimization algorithm and the classical deep learning method UNet, the proposed model reduces the Root Mean Square Error (RMSE) by about 0.002 5 on average, improves the Structural SIMilarity (SSIM) by about 0.003 on average, and better retains the texture and detail information of the image.
Spatial crowdsourcing task allocation algorithm for global optimization
NIE Xichan, ZHANG Yang, YU Dunhui, ZHANG Xingsheng
Journal of Computer Applications    2020, 40 (7): 1950-1958.   DOI: 10.11772/j.issn.1001-9081.2019112025
Concerning the problem that research on spatial crowdsourcing task allocation does not consider the benefits of multiple participants or the global optimization of continuous task allocation, which leads to poor allocation results, an online task allocation algorithm was proposed for the global optimization of the tripartite comprehensive benefit. Firstly, the distribution of crowdsourcing objects (crowdsourcing tasks and workers) in the next time stamp was predicted based on an online random forest and a gated recurrent unit network. Then, a bipartite graph model was constructed based on the situation of the crowdsourcing objects in the current time stamp. Finally, the optimal matching algorithm for weighted bipartite graphs was used to complete the task allocation. The experimental results show that the proposed algorithm realizes the global optimization of continuous task allocation. Compared with the greedy algorithm, it improves the success rate of task allocation by 25.7%, the average comprehensive benefit by 32.2% and the average opportunity cost of workers by 37.8%; compared with the random threshold algorithm, it improves the success rate of task allocation by 27.4%, the average comprehensive benefit by 34.7% and the average opportunity cost of workers by 40.2%.
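A minimal sketch of the final matching step, assuming the comprehensive benefit of assigning each worker to each task has already been scored; SciPy's Hungarian-algorithm solver stands in for the weighted bipartite optimal matching described above, and the benefit matrix is toy data.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# benefit[i, j]: comprehensive benefit of assigning worker i to task j (toy values)
benefit = np.array([
    [4.0, 1.0, 3.0],
    [2.0, 0.0, 5.0],
    [3.0, 2.0, 2.0],
])

# maximum-weight matching on the bipartite worker/task graph
workers, tasks = linear_sum_assignment(benefit, maximize=True)

for w, t in zip(workers, tasks):
    print(f"worker {w} -> task {t} (benefit {benefit[w, t]})")
print("total benefit:", benefit[workers, tasks].sum())
```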
High order TV image reconstruction algorithm based on Chambolle-Pock algorithm framework
XI Yarui, QIAO Zhiwei, WEN Jing, ZHANG Yanjiao, YANG Wenjing, YAN Huiwen
Journal of Computer Applications    2020, 40 (6): 1793-1798.   DOI: 10.11772/j.issn.1001-9081.2019111955
The traditional Total Variation (TV) minimization algorithm is a classical iterative reconstruction algorithm based on Compressed Sensing (CS), and can accurately reconstruct images from sparse and noisy data. However, the algorithm may introduce block artifacts when reconstructing images without an obvious piecewise-constant structure. Research shows that using High Order Total Variation (HOTV) in image denoising can effectively suppress the block artifacts produced by the TV model. Therefore, an HOTV image reconstruction model and its Chambolle-Pock (CP) solving algorithm were proposed. Specifically, the second-order TV norm was constructed from the second-order gradient, then a data-fidelity-constrained second-order TV minimization model was designed, and the corresponding CP algorithm was derived. The Shepp-Logan phantom on a wave background, a grayscale gradient phantom and a real CT phantom were used to perform reconstruction experiments and qualitative and quantitative analysis under ideal-projection and noisy-projection conditions. The reconstruction results for ideal projections show that, compared with the traditional TV algorithm, the HOTV algorithm can effectively suppress block artifacts and improve reconstruction accuracy. The reconstruction results for noisy projections show that both the traditional TV algorithm and the HOTV algorithm have a good denoising effect, but the HOTV algorithm protects image edge information better and has higher anti-noise performance. The HOTV algorithm is a better reconstruction algorithm than the TV algorithm for images without an obvious piecewise-constant structure but with obvious grayscale fluctuation. The proposed HOTV algorithm can be extended to CT reconstruction under different scanning modes and to other imaging modalities.
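A minimal sketch, assuming a discrete second-order gradient built from finite differences, of how a second-order TV regularization term can be evaluated; it only illustrates why smooth ramps are not penalized the way first-order TV penalizes them, and is not the paper's full data-fidelity-constrained CP solver.

```python
import numpy as np

def second_order_tv(img):
    """Second-order TV: sum over pixels of the norm of the discrete second derivatives."""
    dxx = np.zeros_like(img); dxx[:, 1:-1] = img[:, 2:] - 2 * img[:, 1:-1] + img[:, :-2]
    dyy = np.zeros_like(img); dyy[1:-1, :] = img[2:, :] - 2 * img[1:-1, :] + img[:-2, :]
    dxy = np.zeros_like(img); dxy[:-1, :-1] = (img[1:, 1:] - img[1:, :-1]
                                               - img[:-1, 1:] + img[:-1, :-1])
    return np.sqrt(dxx**2 + dyy**2 + 2 * dxy**2).sum()

ramp = np.tile(np.linspace(0, 1, 64), (64, 1))                    # smooth gradient image
step = np.tile((np.arange(64) > 32).astype(float)[:, None], (1, 64))  # piecewise-constant image
print(second_order_tv(ramp), second_order_tv(step))               # ramp has near-zero second-order TV
```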
Interactive water flow heating simulation based on smoothed particle hydrodynamics method
WANG Jiangkun, HE Kunjin, CAO Hongfei, WANG Jinqiang, ZHANG Yan
Journal of Computer Applications    2020, 40 (5): 1409-1414.   DOI: 10.11772/j.issn.1001-9081.2019101734

To solve the problems of difficult interaction and low efficiency in traditional water flow heating simulation, a thermal motion simulation method based on Smoothed Particle Hydrodynamics (SPH) was proposed to control the water flow heating process interactively. Firstly, the continuous water flow was discretized into particles based on the SPH method, the particle group was used to simulate the movement of the water flow, and the particle motion was confined to the container by collision detection. Then, the water particles were heated by a heat conduction model with a Dirichlet boundary condition, and the motion state of the particles was updated according to their temperature in order to simulate the thermal motion of the water flow during heating. Finally, editable system parameters and constraint relationships were defined, and the heating and motion of the water flow under multiple conditions were simulated through human-computer interaction. Taking the heating simulation of a solar water heater as an example, the interactivity and efficiency of the SPH method in solving the heat conduction problem were verified by modifying a few parameters to control the heating of the water heater, which facilitates applications of interactive water flow heating in other virtual scenes.
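A minimal sketch of Dirichlet-boundary heat conduction on a toy 1-D chain of particles; the explicit finite-difference update, spacing and material constants are illustrative stand-ins rather than the paper's SPH kernel formulation.

```python
import numpy as np

n, alpha, dx, dt = 50, 1e-4, 0.01, 0.1     # particles, diffusivity, spacing, time step
temp = np.full(n, 20.0)                    # initial water temperature (degrees C)
t_heater = 80.0                            # Dirichlet boundary: heated container wall

for _ in range(2000):
    temp[0] = t_heater                     # fixed-temperature (Dirichlet) boundary
    lap = np.zeros_like(temp)
    lap[1:-1] = (temp[2:] - 2 * temp[1:-1] + temp[:-2]) / dx**2
    temp = temp + alpha * dt * lap         # explicit heat-conduction update

print(temp[:5].round(2), temp[-5:].round(2))   # warm near the wall, still cool far away
```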

Dynamic reinforcement model for driving safety based on cooperative feedback control in Internet of vehicles
HUANG Chen, CAO Jiannong, WANG Shihui, ZHANG Yan
Journal of Computer Applications    2020, 40 (4): 1209-1214.   DOI: 10.11772/j.issn.1001-9081.2019101808
In the Internet of Vehicles (IoV) environment, a single vehicle cannot meet all time-sensitive driving safety requirements because of its limited capability for information acquisition and processing, so cooperation among vehicles to enhance information sharing and channel access ability is inevitable. In order to solve these problems, a dynamic reinforcement model for driving safety based on a cooperative feedback control algorithm was proposed. Firstly, a virtual fleet cooperation model was proposed to improve the precision and expand the range of global traffic sensing, and a stable cooperation relationship was constructed among vehicles to form cooperative virtual fleets while avoiding channel congestion. Then, a joint optimization model focusing on message transmission and driving control was implemented, and deep fusion of heterogeneous traffic data was used to maximize the safety utility of the IoV. Finally, an adaptive feedback control model was proposed according to the predicted spatio-temporal change of traffic flow, so that the driving safety strategy could be adjusted in real time. Simulation results demonstrate that the proposed model obtains good performance indexes under different traffic flow distribution models, effectively supports driver-assistance control systems, and reduces channel congestion while maintaining driving safety.
Light-weight image fusion method based on SqueezeNet
WANG Jixiao, LI Yang, WANG Jiabao, MIAO Zhuang, ZHANG Yangshuo
Journal of Computer Applications    2020, 40 (3): 837-841.   DOI: 10.11772/j.issn.1001-9081.2019081378
The existing deep learning based infrared and visible image fusion methods have too many parameters and require large amounts of computing resources and memory, so they cannot meet the deployment demands of resource-constrained edge devices such as cell phones and embedded devices. In order to address these problems, a light-weight image fusion method based on SqueezeNet was proposed. SqueezeNet was used to extract image features, a weight map was then obtained from these features, weighted fusion was performed, and finally the fused image was generated. Compared with the ResNet50-based method, the proposed method compresses the model size and the number of network parameters to 1/21 and 1/204 respectively, and runs 5 times faster while maintaining the quality of the fused images. The experimental results demonstrate that the proposed method achieves a better fusion effect than existing traditional methods while reducing the size of the fusion model and accelerating fusion.
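A minimal sketch of feature-driven weighted fusion under stated assumptions: `extract_features` is a placeholder for the SqueezeNet feature extractor (here, a simple gradient-magnitude activity map), and the per-pixel softmax weighting is one common choice, not necessarily the exact rule used in the paper.

```python
import numpy as np

def extract_features(img):
    # placeholder for SqueezeNet features: local activity measured by gradient magnitude
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def fuse(infrared, visible):
    a, b = extract_features(infrared), extract_features(visible)
    # per-pixel softmax over the two activity maps gives the fusion weight map
    w = np.exp(a) / (np.exp(a) + np.exp(b))
    return w * infrared + (1.0 - w) * visible

rng = np.random.default_rng(0)
ir, vis = rng.random((64, 64)), rng.random((64, 64))
print(fuse(ir, vis).shape)
```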
Face hallucination algorithm via combined learning
XU Ruobo, LU Tao, WANG Yu, ZHANG Yanduo
Journal of Computer Applications    2020, 40 (3): 710-716.   DOI: 10.11772/j.issn.1001-9081.2019071178
Most of the existing deep learning based face hallucination algorithms use only a single network to reconstruct the high-resolution output image as a whole, without considering the structural information in face images, which results in a lack of detail in the reconstruction of vital facial organs. Therefore, a face hallucination algorithm based on combined learning was proposed to tackle this problem. In the algorithm, the regions of interest were reconstructed independently by exploiting the advantages of different deep learning models, so that each face region had its own data distribution during network training and the different sub-networks were able to obtain more accurate prior information. Firstly, a superpixel segmentation algorithm was applied to the face image to generate the facial component parts and the facial background image. Secondly, the facial component image patches were independently reconstructed by the Component-Generative Adversarial Network (C-GAN), and the facial background reconstruction network was used to generate the facial background image. Thirdly, the facial component fusion network was used to adaptively fuse the facial component image patches reconstructed by the two different models. Finally, the generated facial component image patches were merged into the facial background image to reconstruct the final face image. The experimental results on the FEI dataset show that the Peak Signal to Noise Ratio (PSNR) of the proposed algorithm is 1.23 dB and 1.11 dB higher, respectively, than that of the face hallucination algorithms Learning to hallucinate face images via Component Generation and Enhancement (LCGE) and Enhanced Discriminative Generative Adversarial Network (EDGAN). The proposed algorithm combines the advantages of different deep learning models to reconstruct more accurate face images and expands the sources of prior information for image reconstruction.
Optimization and parallelization of Graphlet Degree Vector method
Xiangshuai SONG, Fuzhang YANG, Jiang XIE, Wu ZHANG
Journal of Computer Applications    2020, 40 (2): 398-403.   DOI: 10.11772/j.issn.1001-9081.2019081387

The Graphlet Degree Vector (GDV) is an important method for studying biological networks, and can reveal the correlation between nodes in biological networks and their local network structures. However, as the number of automorphism orbits to be examined grows and the scale of biological networks expands, the time complexity of the GDV method increases exponentially. To resolve this problem, starting from the existing serial GDV method, a parallelization of the GDV method based on the Message Passing Interface (MPI) was realized. In addition, the GDV method itself was improved and the improved method was also parallelized. The improved method optimized the calculation process to avoid double counting when searching for the automorphism orbits of different nodes, and tasks were allocated reasonably in combination with a load balancing strategy. Experimental results on simulated network data and real biological network data indicate that both the parallel GDV method and the improved parallel GDV method obtain better parallel performance, can be widely applied to networks of different types and scales, and have good scalability, so they can efficiently search for automorphism orbits in the network.
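A minimal mpi4py sketch of the kind of node-wise work distribution described above; counting graphlet orbits is reduced here to counting each node's triangle memberships, and the round-robin split plus gather is an illustrative load-sharing scheme, not the paper's allocation strategy.

```python
# run with: mpiexec -n 4 python gdv_parallel_sketch.py
from itertools import combinations
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# toy undirected network as an adjacency map (identical on every rank)
edges = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4), (4, 2)]
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

# each rank handles a round-robin slice of the nodes
my_nodes = [n for i, n in enumerate(sorted(adj)) if i % size == rank]
local = {n: sum(1 for a, b in combinations(adj[n], 2) if b in adj[a]) for n in my_nodes}

# gather the partial orbit counts on rank 0
counts = comm.gather(local, root=0)
if rank == 0:
    merged = {k: v for part in counts for k, v in part.items()}
    print(merged)   # node -> number of triangles it participates in
```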

Collaborative filtering recommendation algorithm based on dual most relevant attention network
ZHANG Wenlong, QIAN Fulan, CHEN Jie, ZHAO Shu, ZHANG Yanping
Journal of Computer Applications    2020, 40 (12): 3445-3450.   DOI: 10.11772/j.issn.1001-9081.2020061023
Item-based collaborative filtering learns user preferences from a user's historical interaction items and recommends similar new items based on those preferences. Existing collaborative filtering methods assume that the historical items a user has interacted with all have the same impact on the user, and that all historical interaction items contribute equally to the prediction of the target item, which limits the accuracy of these recommendation methods. In order to solve these problems, a new collaborative filtering recommendation algorithm based on a dual most relevant attention network was proposed, which contains two attention network layers. Firstly, the item-level attention network was used to assign different weights to different historical items in order to capture the most relevant items among the user's historical interactions. Then, the item-interaction-level attention network was used to perceive the degree of correlation between the interactions of the different historical items and the target item. Finally, the fine-grained preferences of users on the historical interaction items and the target item were captured simultaneously through the two attention network layers, so as to make better recommendations for the next step. Experiments were conducted on two real datasets, MovieLens and Pinterest. The experimental results show that the proposed algorithm improves the recommendation hit rate by 2.3 percentage points and 1.5 percentage points respectively compared with the benchmark Deep Item-based Collaborative Filtering (DeepICF) algorithm, which verifies its effectiveness in making personalized recommendations for users.
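A minimal sketch of the item-level attention idea: historical item embeddings are weighted by their relevance to the target item before being pooled into a user representation. The embeddings are random toy data and the dot-product scoring is an assumption; this is not the paper's trained network.

```python
import numpy as np

rng = np.random.default_rng(0)
emb_dim, n_history = 8, 5

history = rng.normal(size=(n_history, emb_dim))   # embeddings of items the user interacted with
target = rng.normal(size=emb_dim)                 # embedding of the candidate (target) item

# item-level attention: score each historical item against the target, normalize with softmax
scores = history @ target
weights = np.exp(scores) / np.exp(scores).sum()

user_repr = weights @ history                     # attention-pooled user representation
prediction = user_repr @ target                   # relevance score for the target item
print(weights.round(3), round(float(prediction), 3))
```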
Pedestrian detection method based on Movidius neural computing stick
ZHANG Yangshuo, MIAO Zhuang, WANG Jiabao, LI Yang
Journal of Computer Applications    2019, 39 (8): 2230-2234.   DOI: 10.11772/j.issn.1001-9081.2018122595
Movidius neural computing stick is a USB-based deep learning inference tool and a stand-alone artificial intelligence accelerator that provides dedicated deep neural network acceleration for a wide range of mobile and embedded vision devices. For the embedded application of deep learning, a near real-time pedestrian target detection method based on Movidius neural computing stick was realized. Firstly, the model size and calculation were adapted to the requirements of the embedded device by improving the RefineDet target detection network structure. Then, the model was retrained on the pedestrian detection dataset and deployed on the Raspberry Pi equipped with Movidius neural computing stick. Finally, the model was tested in the actual environment, and the algorithm achieved an average processing speed of 4 frames per second. Experimental results show that based on Movidius neural computing stick, the near real-time pedestrian detection task can be completed on the Raspberry Pi with limited computing resources.
Abnormal flow monitoring of industrial control network based on convolutional neural network
ZHANG Yansheng, LI Xiwang, LI Dan, YANG Hua
Journal of Computer Applications    2019, 39 (5): 1512-1517.   DOI: 10.11772/j.issn.1001-9081.2018091928
Aiming at the inaccuracy of traditional abnormal flow detection models in industrial control systems, an abnormal flow detection model based on a Convolutional Neural Network (CNN) was proposed. The model consists of a convolutional layer, a fully connected layer, a dropout layer and an output layer. Firstly, the collected network flow characteristic values were scaled to the range of grayscale pixel values, and a network flow grayscale image was generated. Secondly, the generated network traffic grayscale image was fed into the designed convolutional neural network for training and model tuning. Finally, the trained model was applied to abnormal flow detection in the industrial control network. The experimental results show that the proposed model achieves a recognition accuracy of 97.88%, which is 5 percentage points higher than that of the Back Propagation (BP) neural network, previously the most accurate model.
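A minimal sketch of the first step, assuming a fixed-length vector of flow statistics per sample; the min-max scaling to 8-bit pixel values and the square image layout are illustrative choices.

```python
import numpy as np

def flow_to_grayscale(features, side=8):
    """Scale a flow feature vector to 0-255 and lay it out as a side x side grayscale image."""
    f = np.asarray(features, dtype=float)
    lo, hi = f.min(), f.max()
    scaled = np.zeros_like(f) if hi == lo else (f - lo) / (hi - lo) * 255
    pixels = np.zeros(side * side, dtype=np.uint8)
    pixels[:len(f)] = scaled.astype(np.uint8)      # pad the tail with zeros if needed
    return pixels.reshape(side, side)

rng = np.random.default_rng(0)
sample = rng.normal(size=60)                        # e.g. 60 flow statistics per sample
img = flow_to_grayscale(sample)
print(img.shape, img.dtype, img.min(), img.max())
```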
Micro-expression recognition based on local region method
ZHANG Yanliang, LU Bing, HONG Xiaopeng, ZHAO Guoying, ZHANG Weitao
Journal of Computer Applications    2019, 39 (5): 1282-1287.   DOI: 10.11772/j.issn.1001-9081.2018102090
Micro-Expressions (MEs) occur only in local regions of the face, last a very short time and have subtle movement intensity, and unrelated muscle movements also occur in the face while a micro-expression appears. Existing global methods of micro-expression recognition extract the spatio-temporal patterns of these unrelated changes as well, thereby reducing the representation capability of the feature vectors and affecting recognition performance. To solve this problem, a local region method was proposed to recognize micro-expressions. Firstly, according to the regions of the Action Units (AUs) related to micro-expressions, seven local regions related to micro-expressions were partitioned by facial key-point coordinates. Then, the spatio-temporal patterns of these local regions were extracted and concatenated to form feature vectors for micro-expression recognition. The experimental results of leave-one-subject-out cross validation show that the micro-expression recognition accuracy of the local region method is 9.878% higher than that of the global region method. Analysis of the confusion matrix of each region's recognition results shows that the proposed method makes full use of the structural information of each local facial region and effectively eliminates the influence of regions unrelated to the micro-expression on recognition performance, so its micro-expression recognition performance is significantly improved compared with the global region method.
Real-time multi-face landmark localization algorithm based on deep residual and feature pyramid neural network
XIE Jinheng, ZHANG Yansheng
Journal of Computer Applications    2019, 39 (12): 3659-3664.   DOI: 10.11772/j.issn.1001-9081.2019040600
Most face landmark detection algorithms include two steps, face detection and face landmark localization, which increases the processing time. Aiming at this problem, a one-step, real-time algorithm for multi-face landmark localization was proposed. Heatmaps generated from the face landmark coordinates were used as data labels. A deep residual network was used for early feature extraction from the image, and a feature pyramid network was used to fuse features representing receptive fields of different scales at different network depths. Then, based on intermediate supervision, multiple landmark prediction networks were cascaded to realize one-step, coarse-to-fine facial landmark regression without face detection. While localizing with high accuracy, a forward pass of the proposed algorithm takes only about 0.0075 s (133 frames per second), satisfying the requirement of real-time facial landmark localization. The proposed algorithm achieves a mean error of 6.06% and a failure rate of 11.70% on the Wider Facial Landmarks in-the-Wild (WFLW) dataset.
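A minimal sketch of turning landmark coordinates into heatmap labels, assuming one Gaussian-blob channel per landmark; the map size and the standard deviation are illustrative.

```python
import numpy as np

def landmarks_to_heatmaps(landmarks, height=64, width=64, sigma=2.0):
    """One heatmap per landmark: a Gaussian blob centred on the (x, y) coordinate."""
    ys, xs = np.mgrid[0:height, 0:width]
    maps = np.zeros((len(landmarks), height, width), dtype=np.float32)
    for i, (x, y) in enumerate(landmarks):
        maps[i] = np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2 * sigma ** 2))
    return maps

heatmaps = landmarks_to_heatmaps([(20, 30), (44, 30), (32, 48)])   # e.g. eyes and mouth
print(heatmaps.shape, heatmaps[0].argmax() % 64, heatmaps[0].argmax() // 64)
```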
Data race detection approach in concurrent programs
ZHANG Yang, LIANG Yanan, ZHANG Dongwen, SUN Shixin
Journal of Computer Applications    2019, 39 (1): 61-65.   DOI: 10.11772/j.issn.1001-9081.2018071605
Aiming at the problems of false positives and false negatives in data race detection, a novel static data race detection approach was proposed. Firstly, intra-thread and inter-thread function call graphs were automatically constructed via control flow analysis. Secondly, information about variable-access events within each thread was collected, and possible races were detected based on the defined data race conditions. Then, in order to improve detection accuracy, alias variables and alias locks were analyzed to reduce false negatives and false positives, respectively. Finally, the sequential relationship between access events was abstracted through control flow analysis, and program slicing was used to determine the happens-before relationship of access events, thereby reducing false positives caused by ignoring thread interactions. A data race detection tool based on this approach was implemented in Java on the Soot framework. In the experiments, several benchmarks from the JGF and IBM Contest benchmark suites, such as raytracer and airline, were selected for evaluation, and the results were compared with an existing data race detection algorithm and tool (HB (Happens-Before) and RVPredict). The experimental results show that, compared with the HB algorithm and the RVPredict tool, the total number of data races detected by the proposed approach is increased by 81% and 16% respectively, and its data race detection accuracy is increased by 14% and 19% respectively, effectively avoiding false negatives and false positives.
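A minimal sketch of the classic data race condition such detection builds on: two accesses to the same variable from different threads, at least one a write, with no common lock held. The event representation and the absence of happens-before filtering are simplifications for illustration, not the paper's analysis.

```python
from itertools import combinations

# access events collected per thread: (thread, variable, kind, locks held)
events = [
    ("T1", "counter", "write", frozenset({"L1"})),
    ("T2", "counter", "write", frozenset()),        # unguarded write
    ("T1", "name",    "read",  frozenset({"L2"})),
    ("T2", "name",    "write", frozenset({"L2"})),  # same lock held -> no race
]

def is_race(a, b):
    return (a[0] != b[0]                 # different threads
            and a[1] == b[1]             # same variable
            and "write" in (a[2], b[2])  # at least one access is a write
            and not (a[3] & b[3]))       # no common lock protects both accesses

for a, b in combinations(events, 2):
    if is_race(a, b):
        print("possible race on", a[1], "between", a[0], "and", b[0])
```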
Firefly algorithm based on uniform local search and variable step size
WANG Xiaojing, PENG Hu, DENG Changshou, HUANG Haiyan, ZHANG Yan, TAN Xujie
Journal of Computer Applications    2018, 38 (3): 715-721.   DOI: 10.11772/j.issn.1001-9081.2017082039
Since the convergence speed of the Firefly Algorithm (FA) is slow, and the solution accuracy of the FA is low, an improved Firefly Algorithm with Uniform local search and Variable step size (UVFA) was proposed. Firstly, uniform local search was established by the uniform design theory to accelerate convergence and to enhance exploitation ability. Secondly, search step size was dynamically tuned by using the variable step size strategy to balance exploration and exploitation. Finally, uniform local search and variable step size were fused. The results of simulation tests on twelve benchmark functions show that the objective function mean of UVFA was significantly better than FA, WSSFA (Wise Step Strategy for Firefly Algorithm), VSSFA (Variable Step Size Firefly Algorithm) and Uniform local search Firefly Algorithm (UFA), and the time complexity was obviously reduced. UVFA is good at solving low dimensional and high dimensional problems, and has good robustness.
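A minimal sketch of a firefly update with a variable step size on a benchmark function; the geometric decay schedule is an illustrative choice, and the uniform-design local search component of UVFA is omitted, so this is not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):                       # benchmark objective: minimize the sum of squares
    return np.sum(x ** 2, axis=-1)

dim, n_fireflies, iters = 10, 20, 200
beta0, gamma = 1.0, 1.0
pop = rng.uniform(-5, 5, size=(n_fireflies, dim))

for t in range(iters):
    alpha = 0.5 * (0.01 / 0.5) ** (t / iters)        # variable step size: decays over iterations
    fit = sphere(pop)
    for i in range(n_fireflies):
        for j in range(n_fireflies):
            if fit[j] < fit[i]:                       # move toward every brighter firefly
                r2 = np.sum((pop[i] - pop[j]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)
                pop[i] += beta * (pop[j] - pop[i]) + alpha * (rng.random(dim) - 0.5)
        fit[i] = sphere(pop[i])

print("best objective:", sphere(pop).min())
```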
Backtracking-based conjugate gradient iterative hard thresholding reconstruction algorithm
ZHANG Yanfeng, FAN Xi'an, YIN Zhiyi, JIANG Tiegang
Journal of Computer Applications    2018, 38 (12): 3580-3583.   DOI: 10.11772/j.issn.1001-9081.2018040822
Since the Backtracking-based Iterative Hard Thresholding algorithm (BIHT) requires a large number of iterations and a long reconstruction time, a Backtracking-based Conjugate Gradient Iterative Hard Thresholding algorithm (BCGIHT) was proposed. Firstly, the idea of backtracking was adopted in each iteration, and the support set of the previous iteration was combined with the current support set to form a candidate set. Then, a new support set was selected in the space spanned by the matrix columns corresponding to the candidate set, so as to reduce the number of times the support set is selected repeatedly and to ensure that the correct support set is found quickly. Finally, according to whether the support set of the previous iteration equals that of the next iteration, either the gradient descent method or the conjugate gradient method was used as the optimization method, so as to accelerate the convergence of the algorithm. Reconstruction experiments on one-dimensional random Gaussian signals show that the reconstruction success rate of BCGIHT is higher than that of BIHT and similar algorithms, and its reconstruction time is at least 25% shorter than that of BIHT. Reconstruction experiments on the Pepper image show that the reconstruction accuracy and anti-noise performance of the proposed BCGIHT algorithm are comparable to those of BIHT and similar algorithms, and its reconstruction time is reduced by more than 50% compared with BIHT.
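A minimal sketch of the backtracking idea on a toy compressed-sensing problem: the previous support is merged with newly selected atoms into a candidate set, a solution is computed on the candidate columns, and the support is pruned back to the sparsity level. A direct least-squares solve stands in for the gradient-descent/conjugate-gradient switching described above, and all problem sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 64, 256, 8                      # measurements, signal length, sparsity

A = rng.normal(size=(m, n)) / np.sqrt(m)  # random Gaussian sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
y = A @ x_true

x = np.zeros(n)
support = np.array([], dtype=int)
for _ in range(30):
    g = A.T @ (y - A @ x)                                              # gradient (proxy) step
    candidate = np.union1d(support, np.argsort(np.abs(x + g))[-k:])    # backtracking: merge supports
    z = np.zeros(n)
    z[candidate] = np.linalg.lstsq(A[:, candidate], y, rcond=None)[0]  # solve on candidate columns
    support = np.argsort(np.abs(z))[-k:]                               # prune back to k largest entries
    x = np.zeros(n); x[support] = z[support]

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```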
Energy-efficient micro base station deployment method in heterogeneous network with quality of service constraints
ZHANG Yangyang, TANG Hongbo, YOU Wei, WANG Xiaolei, ZHAO Yu
Journal of Computer Applications    2017, 37 (8): 2133-2138.   DOI: 10.11772/j.issn.1001-9081.2017.08.2133
Aiming at the problem of high energy consumption caused by the increase of base station density in heterogeneous dense network, an energy-efficient method for micro base station deployment in heterogeneous networks was proposed. Firstly, the feasibility of micro base station positions was considered to mitigate the effects of environmental conditions. Then the optimization target value was weighed under different user distribution probability to enhance adaptability for different user distribution scenarios. Finally, an energy-efficient deployment algorithm for micro base stations was proposed by jointly optimizing the number, deployment position and power configuration of micro base stations. Simulation results show that the proposed method improves energy efficiency by up to 26% compared with the scheme which only optimizes the number and location of micro base stations. The experimental results demonstrate that the combined optimization method can improve the energy efficiency of the system compared with the deployment method without considering the power factor, and verifies the influence of the micro base station power on the energy efficiency of heterogeneous network.
Opinion formation model of social network based on node intimacy and influence
ZHANG Yanan, SUN Shibao, ZHANG Jingshan, YIN Lihang, YAN Xiaolong
Journal of Computer Applications    2017, 37 (4): 1083-1087.   DOI: 10.11772/j.issn.1001-9081.2017.04.1083
Aiming at the universality of individual interaction and the heterogeneity of individual social influence in opinion spreading, an opinion formation model for social networks was proposed on the basis of the Hegselmann-Krause model. By introducing the concepts of intimacy between individuals, interpersonal similarity and interaction strength, the individual interaction set was extended, the influence weights were quantified reasonably, and a more realistic opinion interaction rule was built. Through a series of simulation experiments, the effects of the model's main parameters on opinion evolution were analyzed. The simulation results indicate that group opinions can converge to the same value and form a consensus under different confidence thresholds, and the larger the confidence threshold, the shorter the convergence time; when the confidence threshold is 0.2, the convergence time is only 10. Meanwhile, extending the interaction set and increasing the strength of interpersonal similarity promote consensus formation. In addition, when the clustering coefficient and the average degree of the scale-free network are higher, group opinions are more likely to converge. The results are helpful for understanding the dynamic process of opinion formation, and can guide social managers in decision-making and analysis.
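A minimal sketch of the underlying Hegselmann-Krause update that the proposed model extends: each agent averages the opinions of neighbours within its confidence threshold. The intimacy/similarity weighting described above is reduced here to uniform weights for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, eps, steps = 50, 0.2, 30          # agents, confidence threshold, iterations
opinions = rng.random(n)             # initial opinions in [0, 1]

for _ in range(steps):
    updated = np.empty_like(opinions)
    for i in range(n):
        neighbours = np.abs(opinions - opinions[i]) <= eps   # agents within the confidence bound
        updated[i] = opinions[neighbours].mean()             # uniform-weight averaging
    opinions = updated

print("opinion clusters:", np.unique(opinions.round(3)))
```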
Trajectory data clustering algorithm based on spatio-temporal pattern
SHI Lukui, ZHANG Yanru, ZHANG Xin
Journal of Computer Applications    2017, 37 (3): 854-859.   DOI: 10.11772/j.issn.1001-9081.2017.03.854
Because existing trajectory clustering algorithms usually use only spatial characteristics as the standard for similarity measurement and neglect temporal characteristics, a trajectory data clustering algorithm based on spatio-temporal patterns was proposed. The proposed algorithm follows the partition-and-group framework. Firstly, trajectory feature points were extracted by a curve edge detection method. Then, sub-trajectory segments were divided according to the trajectory feature points. Finally, a density-based clustering algorithm was applied according to the spatio-temporal similarity between sub-trajectory segments. The experimental results show that the trajectory feature points extracted by the proposed algorithm describe the trajectory structure more accurately while remaining simple. At the same time, the spatio-temporal similarity measure obtains better clustering results by taking into account both the spatial and the temporal characteristics of trajectories.
Weibo users credibility evaluation based on user relationships
LI Fumin, TONG Lingling, DU Cuilan, LI Yangxi, ZHANG Yangsen
Journal of Computer Applications    2017, 37 (3): 654-659.   DOI: 10.11772/j.issn.1001-9081.2017.03.654
With the deepening of Weibo research, credibility evaluation of Weibo users has become a research hotspot. Aiming at this problem, a user credibility analysis method based on user relationships was proposed. Taking Sina Weibo as the research object, firstly, seven characteristics of a user were analyzed from three aspects, namely user information, interaction information and behavior information, and the user's self-evaluated credibility was obtained by using the Analytic Hierarchy Process (AHP). Then, taking the user self-evaluation as the base point, the user relationship network as the carrier, and the potential evaluation relationships among users, the PageRank algorithm was improved and a user credibility evaluation model called User-Rank was proposed. The model evaluates the credibility of a user comprehensively through the other users in the relationship network. Experiments on large-scale real Weibo data show that the proposed method obtains good evaluation results for user credibility.
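A minimal sketch of a PageRank-style credibility propagation of the kind described above: AHP self-evaluation scores seed the ranking and the relationship network spreads it. The tiny graph, the damping factor and the use of self-evaluation as the teleport vector are illustrative assumptions about User-Rank, not its published definition.

```python
import numpy as np

# toy directed "evaluation" network: edge i -> j means user i endorses (e.g. follows) user j
n = 4
edges = [(0, 1), (2, 1), (3, 1), (1, 2), (3, 2)]
self_eval = np.array([0.6, 0.9, 0.7, 0.4])       # AHP-style self-evaluated credibility

# column-stochastic transition matrix over outgoing endorsements
M = np.zeros((n, n))
for i, j in edges:
    M[j, i] = 1.0
out = M.sum(axis=0)
M[:, out > 0] /= out[out > 0]

d = 0.85
teleport = self_eval / self_eval.sum()           # bias the random jump by self-evaluation
rank = np.full(n, 1.0 / n)
for _ in range(100):
    rank = d * M @ rank + (1 - d) * teleport

print(rank.round(3))                             # user 1 gathers the most credibility
```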
3D simultaneous localization and mapping for mobile robot based on VSLAM
LIN Huican, LYU Qiang, WANG Guosheng, ZHANG Yang, LIANG Bing
Journal of Computer Applications    2017, 37 (10): 2884-2887.   DOI: 10.11772/j.issn.1001-9081.2017.10.2884
The Simultaneous Localization And Mapping (SLAM) is an essential skill for mobile robots exploring in unknown environments without external referencing systems. As the sparse map constructed by feature-based Visual SLAM (VSLAM) algorithm is not suitable for robot application, an efficient and compact map construction algorithm based on octree structure was proposed. First, according to the pose and depth data of the keyframes, the point cloud map of the scene corresponding to the image was constructed, and then the map was processed by the octree map technique, and a map suitable for the application of the robot was constructed. Comparing the proposed algorithm with RGB-Depth SLAM (RGB-D SLAM) algorithm, ElasticFusion algorithm and Oriented FAST and Rotated BRIEF SLAM (ORB-SLAM) algorithm on publicly available benchmark datasets, the results show that the proposed algorithm has high validity, accuracy and robustness. Finally, the autonomous mobile robot was built, and the improved VSLAM system was applied to the mobile robot. It can complete autonomous obstacle avoidance and 3D map construction in real-time, and solve the problem that the sparse map cannot be used for obstacle avoidance and navigation.
Dynamic sampling method for wireless sensor network based on compressive sensing
SONG Yang, HUANG Zhiqing, ZHANG Yanxin, LI Mengjia
Journal of Computer Applications    2017, 37 (1): 183-187.   DOI: 10.11772/j.issn.1001-9081.2017.01.0183
It is hard to obtain a satisfactory reconstructive quality while compressing time-varying signals monitored by Wireless Sensor Network (WSN) using Compressive Sensing (CS), therefore a novel dynamic sampling method based on data prediction and sampling rate feedback control was proposed. Firstly, the sink node acquired the changing trend by analyzing the liner degree differences between current reconstructed data and last reconstructed data. Then the sink node calculated the suitable sampling rate according to the changing trend and fed back the result to sensors to dynamically adjust their sampling process. The experimental results show that the proposed dynamic sampling method can acquire higher reconstructed data accuracy than the CS data gathering method based on static sampling rate for WSN.
Chinese speech segmentation method based on Gauss distribution of time spans of syllables
ZHANG Yang, ZHAO Xiaoqun, WANG Digang
Journal of Computer Applications    2016, 36 (5): 1410-1414.   DOI: 10.11772/j.issn.1001-9081.2016.05.1410
So far, there has been no accurate method for segmenting natural Chinese speech into syllables, which would be useful for labeling speech against a reference text automatically instead of manually. Based on two hypotheses, that the time spans of Chinese syllables with the same pronunciation obey a Gaussian distribution and that a short-time energy valley exists between two adjacent syllables, a Chinese speech segmentation method based on the Gaussian distribution of syllable time spans was proposed. A simplified method based on the distribution of energy valleys was also given, which effectively reduces the time complexity of the segmentation method. The experimental results show that the segmentation accuracy (the mean square value of the time spans between manual labels and the labels created by this method) reaches 10^-3, and the computing time is less than 1 s in MATLAB on a PC.
Malicious domain detection based on multiple-dimensional features
ZHANG Yang, LIU Tingwen, SHA Hongzhou, SHI Jinqiao
Journal of Computer Applications    2016, 36 (4): 941-944.   DOI: 10.11772/j.issn.1001-9081.2016.04.0941
The Domain Name System (DNS) provides the domain name resolution service, i.e., converting domain names to IP addresses. Malicious domain detection mainly aims to discover illegal activities and to ensure the normal operation of domain name servers. Prior work on malicious domain name detection was summarized, and a new machine learning based malicious domain detection algorithm exploiting multi-dimensional features was further proposed. With respect to domain name lexical features, more fine-grained features were extracted, such as the transition frequency between digits and letters and the maximum length of consecutive letters. As for the network attribute features, more attention was paid to the name servers, such as their quantity and degree of dispersion. The experimental results show that the accuracy, recall rate and F1 value of the proposed method all reach 99.8%, indicating better performance in malicious domain name detection.
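A minimal sketch of the two lexical features named above; the exact definitions used in the paper are not given, so the digit/letter transition frequency and the longest run of letters are computed here in one straightforward way.

```python
def lexical_features(domain):
    name = domain.split(".")[0]                      # drop the suffix for illustration
    # transition frequency between digits and letters
    transitions = sum(
        1 for a, b in zip(name, name[1:])
        if (a.isdigit() and b.isalpha()) or (a.isalpha() and b.isdigit())
    )
    transition_freq = transitions / max(len(name) - 1, 1)
    # maximum length of consecutive letters
    longest, run = 0, 0
    for ch in name:
        run = run + 1 if ch.isalpha() else 0
        longest = max(longest, run)
    return {"transition_freq": transition_freq, "max_letter_run": longest}

print(lexical_features("a1b2c3d4e5.example"))   # many digit/letter switches, short letter runs
print(lexical_features("mail.example"))         # no digits, one long letter run
```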
Personal relation extraction based on text headline
YAN Yang, ZHAO Jiapeng, LI Quangang, ZHANG Yang, LIU Tingwen, SHI Jinqiao
Journal of Computer Applications    2016, 36 (3): 726-730.   DOI: 10.11772/j.issn.1001-9081.2016.03.726
In order to overcome the interference of non-person entities, the difficulty of selecting feature words, and the influence of multiple persons on target personal relation extraction, this paper proposed person judgment based on a decision tree, relation feature word generation based on minimum set cover, and a statistical approach based on three-layer sentence pattern rules. In the first step, 18 features were extracted from the attribute files of the China Conference on Machine Learning (CCML) 2015 competition, a C4.5 decision tree was used as the classifier, and a recall rate of 98.2% and a precision rate of 92.6% were obtained; the results of this step were used as the input of the next step. Next, the algorithm based on minimum set cover was used: the feature word set covers all the personal relations while its size is kept at a proper level, and it is used to identify the relation type in a text headline. In the last step, a method based on statistics over three-layer sentence pattern rules was used to filter out rules with small proportions and to specify the sentence pattern rules according to their positive and negative proportions, in order to judge whether an extracted personal relation is correct. The experimental results show that the approach achieves a recall rate of 82.9%, a precision rate of 74.4% and an F1-measure of 78.4%, so the proposed method can be applied to personal relation extraction from text headlines and helps to construct personal relation knowledge graphs.
Chinese speech segmentation into syllables based on energies in different times and frequencies
ZHANG Yang, ZHAO Xiaoqun, WANG Digang
Journal of Computer Applications    2016, 36 (11): 3222-3228.   DOI: 10.11772/j.issn.1001-9081.2016.11.3222
Precise speech segmentation methods are helpful for comparing speech with acoustic models in speech recognition, and can also greatly improve the efficiency of corpus annotation work. A new method for segmenting Chinese speech into syllables based on energies in different times and frequencies was proposed: firstly, silence frames were found in the traditional way; secondly, unvoiced frames were found using the difference between energies in different frequency bands; thirdly, voiced frames and speech frames were found with the help of 0-1 energies in specific frequency ranges; finally, syllable positions were determined from the judgments above. The experimental results show that the proposed method, with a syllable error of 0.0297 s and a syllable deviation of 7.93%, is superior to the Merging-Based Syllable Detection Automaton (MBSDA) and the Gauss fitting method.